OpenAI CEO Sam Altman cautions about potential dangers of artificial intelligence due to ‘misalignments’
On Tuesday, the CEO of OpenAI, the company behind ChatGPT, described the potential dangers of artificial intelligence that keep him up at night. Specifically, he highlighted the risk of “subtle societal misalignments” that could lead these systems to cause disastrous consequences.
Sam Altman, speaking at the World Governments Summit in Dubai via video call, reiterated his call for a body like the International Atomic Energy Agency to oversee artificial intelligence, which is likely to advance faster than the world expects.
“There are some things that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking down the street when things go wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and, through no particular ill intention, things just go horribly wrong.”
However, Altman stressed that AI companies such as OpenAI should not be in the driver’s seat when it comes to crafting regulations for the industry.
“We’re still very much in the discussion stage. So, you know, everybody in the world is having conferences. Everybody has an idea, a policy paper, and that’s OK,” Altman said. “I think we’re still in a time where the conversation is needed and healthy, but at some point in the next few years I think we need to move toward an action plan that has real buy-in around the world.”
OpenAI, a San Francisco-based artificial intelligence startup, is one of the industry leaders. The Associated Press has signed an agreement with OpenAI to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft for using its stories without permission to train OpenAI’s chatbots.
OpenAI’s success has made Altman the public face of the rapid commercialization of generative artificial intelligence—and the fear of what the new technology could lead to.
The United Arab Emirates, an autocratic federation of seven hereditary sheikhdoms, shows signs of that risk. Speech remains tightly controlled. Those restrictions affect the flow of accurate information, the same details that AI programs like ChatGPT rely on as machine-learning systems to provide answers to users.
The Emirates is also home to Abu Dhabi-based G42, a firm overseen by the country’s powerful national security adviser. According to experts, G42 has the world’s leading Arabic-language artificial intelligence model. The company has faced espionage accusations over its ties to a mobile phone application identified as spyware, as well as allegations that it may have secretly gathered genetic material from Americans for the Chinese government.
G42 has said it will cut ties with Chinese suppliers over American concerns. However, the discussion with Altman, moderated by UAE Minister of State for Artificial Intelligence Omar al-Olama, did not touch on those local concerns.
Altman, for his part, said he was pleased to see that schools where teachers feared students would use AI to write papers are now embracing the technology as crucial to their future. But he added that artificial intelligence is still in its infancy.
“I think the reason is our current technology, which is like … the very first cellphone with a black-and-white screen,” Altman said. “So give us time. But I will say that in a few years it will be much better than it is now. And in a decade it should be pretty remarkable.”